klotz: prompt engineering* + large language models*


  1. Security researchers have discovered that LLM chatbots can be made to ignore their guardrails by using prompts with terrible grammar and long, run-on sentences. This bypasses safety filters and allows the model to generate harmful responses. The research introduces the 'refusal-affirmation logit gap' as a metric for assessing model vulnerability.
    2025-08-26, by klotz
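The bookmarked research describes the 'refusal-affirmation logit gap' only as a vulnerability metric; its exact definition is in the paper. As a hedged illustration, one plausible form is the difference between the model's logit for a refusal token and its logit for an affirmative continuation, with a smaller gap suggesting the refusal is easier to flip. The token names and toy logits below are stand-ins, not from the paper.

```python
# Illustrative sketch only: a gap between a refusal token's logit and an
# affirmation token's logit, over a toy logit dictionary. A real
# measurement would use the model's actual next-token logits.

def refusal_affirmation_gap(logits: dict[str, float],
                            refusal_token: str = "Sorry",
                            affirmation_token: str = "Sure") -> float:
    return logits[refusal_token] - logits[affirmation_token]
```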
  2. GitHub Models is a suite of developer tools for AI development, offering features like prompt management, model comparison, and quantitative evaluations, integrated directly into GitHub workflows.
    2025-06-06, by klotz
  3. A post with pithy observations and clear conclusions from building complex LLM workflows, covering topics like prompt chaining, data structuring, model limitations, and fine-tuning strategies.
  4. This article details new prompting techniques for ChatGPT-4.1, emphasizing structured prompts, precise delimiting, agent creation, long context handling, and chain-of-thought prompting to achieve better results.
  5. This article explains prompt engineering techniques for large language models (LLMs), covering methods like zero-shot, few-shot, system, contextual, role, step-back, chain-of-thought, self-consistency, ReAct, Automatic Prompt Engineering, and code prompting. It also details best practices and output configuration for optimal results.
    2025-04-16, by klotz
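Two of the techniques listed in that article, few-shot prompting (showing the model worked examples) and chain-of-thought prompting (asking it to reason step by step), can be sketched as simple prompt construction. The divisibility task and example answers here are hypothetical; any LLM client would consume the resulting string.

```python
# Sketch: build a few-shot, chain-of-thought prompt as plain text.
# The examples demonstrate step-by-step reasoning before the answer.

def build_few_shot_cot_prompt(question: str) -> str:
    examples = [
        ("Is 15 divisible by 3?",
         "15 / 3 = 5 with no remainder, so yes."),
        ("Is 14 divisible by 3?",
         "14 / 3 = 4 remainder 2, so no."),
    ]
    parts = ["Answer the question. Think step by step."]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    # Leave the final answer slot open for the model to complete.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```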
  6. This article details an iterative process of using ChatGPT to explore the parallels between Marvin Minsky's "Society of Mind" and Anthropic's research on Large Language Models, specifically Claude Haiku. The user experimented with different prompts to refine the AI's output, navigating issues like model confusion (GPT-2 vs. Claude) and overly conversational tone. Ultimately, prompting the AI with direct source materials (Minsky’s books and Anthropic's paper) yielded the most insightful analysis, highlighting potential connections like the concept of "A and B brains" within both frameworks.
  7. A guide on implementing prompt engineering patterns to make RAG implementations more effective and efficient, covering patterns like Direct Retrieval, Chain of Thought, Context Enrichment, Instruction-Tuning, and more.
    2025-02-27, by klotz
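The "Direct Retrieval" pattern that guide mentions can be sketched as placing retrieved passages in the prompt with an instruction to answer only from that context. The keyword-overlap retriever below is a stand-in for illustration; a real RAG system would use embeddings and a vector store.

```python
# Sketch of a direct-retrieval RAG prompt. The retriever ranks documents
# by word overlap with the query -- a toy stand-in, not a real retriever.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

def rag_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return ("Answer using only the context below. "
            "If the answer is not in the context, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```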
  8. The article discusses how structured, modular software engineering practices enhance the effectiveness of large language models (LLMs) in software development tasks. It emphasizes the importance of clear and coherent code, which allows LLMs to better understand, extend functionality, and debug. The author shares experiences from the Bad Science Fiction project, illustrating how well-engineered code improves AI collaboration.

    Key takeaways:
    1. **Modular Code**: Use small, well-documented code blocks to aid LLM performance.
    2. **Effective Prompts**: Design clear, structured prompts by defining context and refining iteratively.
    3. **Chain-of-Thought Models**: Provide precise inputs to leverage structured problem-solving abilities.
    4. **Prompt Literacy**: Master expressing computational intent clearly in natural language.
    5. **Iterative Refinement**: Utilize AI consultants for continuous code improvement.
    6. **Separation of Concerns**: Organize code into server and client roles for better AI interaction.
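The "Effective Prompts" takeaway above (define context, state the task, refine iteratively) can be sketched as a small template builder. The section names are illustrative, not from the article.

```python
# Sketch: assemble a structured code-task prompt from context, task,
# and constraints sections, in the spirit of takeaway 2 above.

def structured_code_prompt(context: str, task: str,
                           constraints: list[str]) -> str:
    lines = [
        "## Context", context,
        "## Task", task,
        "## Constraints",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)
```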
  9. The article explains six essential strategies for customizing Large Language Models (LLMs) to better meet specific business needs or domain requirements. These strategies include Prompt Engineering, Decoding and Sampling Strategy, Retrieval Augmented Generation (RAG), Agent, Fine-Tuning, and Reinforcement Learning from Human Feedback (RLHF). Each strategy is described with its benefits, limitations, and implementation approaches to align LLMs with specific objectives.
    2025-02-25, by klotz
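One of the six strategies above, "Decoding and Sampling Strategy", concerns how tokens are chosen from the model's output distribution. A minimal sketch of temperature scaling followed by top-k truncation over a toy logit dictionary (a real system operates on the model's full logits):

```python
import math
import random

# Sketch: temperature-scaled, top-k token sampling over toy logits.
def top_k_sample(logits: dict[str, float], k: int = 2,
                 temperature: float = 0.7, seed: int = 0) -> str:
    scaled = {t: l / temperature for t, l in logits.items()}
    # Keep only the k highest-scoring tokens.
    top = dict(sorted(scaled.items(), key=lambda kv: -kv[1])[:k])
    # Softmax over the surviving tokens.
    z = sum(math.exp(v) for v in top.values())
    probs = {t: math.exp(v) / z for t, v in top.items()}
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()))[0]
```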
  10. The article discusses the rise of prompt engineering as a discipline for tuning prompts to interact with large language models (LLMs) effectively. It addresses the challenges of curating and maintaining a high-quality prompt store, highlighting the difficulty posed by overlapping prompts, and uses content writing as an example to illustrate the need for a systematic approach to retrieving optimal prompts.
    2025-02-23, by klotz
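The "systematic approach to retrieving optimal prompts" discussed in that article can be sketched as a small tagged prompt store, where lookup ranks stored prompts by tag overlap with the request. The store contents and scoring are illustrative, not from the article.

```python
# Sketch: pick the stored prompt whose tags best match the request.
def best_prompt(store: list[dict], wanted_tags: set[str]) -> str:
    match = max(store, key=lambda p: len(wanted_tags & p["tags"]))
    return match["text"]

# Toy prompt store (hypothetical entries).
store = [
    {"tags": {"content", "blog", "seo"},
     "text": "Write a blog post about {topic}, optimized for search."},
    {"tags": {"code", "review"},
     "text": "Review this diff for bugs and style issues."},
]
```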


Propulsed by SemanticScuttle